A duplication-free quantum neural network for universal approximation
Authors
Abstract
The universality of a quantum neural network refers to its ability to approximate arbitrary functions and is a theoretical guarantee of its effectiveness: a non-universal quantum neural network could fail to complete the machine learning task. One proposal is to encode the data into identical copies combined by tensor product, but this substantially increases the system size and circuit complexity. To address this problem, we propose a simple design of a duplication-free quantum neural network whose universality can be rigorously proved. Compared with other established proposals, our model requires significantly fewer qubits and a shallower circuit, lowering the resource overhead of implementation. It is also more robust against noise and easier to implement on near-term devices. Simulations show that our model can solve a broad range of classical machine learning problems, demonstrating its application potential.
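The resource cost of the copy-based encoding the abstract criticizes can be illustrated with a small sketch. This is a generic illustration, not the paper's construction: it assumes a common single-qubit angle encoding of a scalar feature and shows that taking n identical copies under the tensor product makes the state dimension grow as 2^n (one extra qubit per copy), whereas a duplication-free encoding keeps a single copy.

```python
import numpy as np

def single_qubit_state(x):
    # Angle encoding of a scalar feature x into one qubit
    # (a generic, illustrative choice of encoding).
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def copy_encoding(x, n_copies):
    # Tensor product of n identical copies: |psi(x)>^(x n).
    # Each extra copy adds one qubit, doubling the state dimension.
    state = single_qubit_state(x)
    out = state
    for _ in range(n_copies - 1):
        out = np.kron(out, state)
    return out

x = 0.7
for n in (1, 2, 4, 8):
    dim = copy_encoding(x, n).size
    print(f"{n} copies -> state dimension {dim} ({n} qubits)")
```

The exponential growth of the state dimension (and the linear growth in qubit count) is what the duplication-free design avoids by not replicating the encoded data.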
Similar resources
A Wavelet Neural Network for Function Approximation and Network Optimization
A new mapping network combining wavelets and neural networks is proposed. The algorithm consists of two processes: the self-construction of the network and the minimization of errors. In the first process, the network structure is determined by using wavelet analysis. In the second process, the approximation errors are minimized. The merits of the proposed network are as follows: network optimization, ...
Verification of an Evolutionary-based Wavelet Neural Network Model for Nonlinear Function Approximation
Nonlinear function approximation is one of the most important tasks in system analysis and identification. Several models have been presented to achieve an accurate approximation of nonlinear mathematical functions. However, the majority of the models are specific to certain problems and systems. In this paper, an evolutionary-based wavelet neural network model is proposed for structure definiti...
Universal Deep Neural Network Compression
Compression of deep neural networks (DNNs) for memory- and computation-efficient compact feature representations becomes a critical problem, particularly for deployment of DNNs on resource-limited platforms. In this paper, we investigate lossy compression of DNNs by weight quantization and lossless source coding for memory-efficient inference. Whereas the previous work addressed non-universal scal...
Journal
Journal title: Science China Physics, Mechanics & Astronomy
Year: 2023
ISSN: 1869-1927, 1674-7348
DOI: https://doi.org/10.1007/s11433-023-2098-8